Designing Low Upkeep Software (jefftk.com)
393 points by mhb 3 days ago | 246 comments





People think if a project on github hasn't been updated in 3 months then it has been abandoned!!

Like, why don't we just let projects be "done"? Things don't need to be maintained and updated for eternity.

In my mind, the best software engineering is where you solve a problem once and your solution just works and needs no configuration or maintenance or updates.

This of course has a very low chance of happening if your system has to exist as part of an "ecosystem" where you expect/assume the presence of some external service that can change its API on a whim (or just disappear).

Yes, to a certain degree it's impossible to design software that does not exist as a part of an ecosystem, which is why I put it in quotes.

Some APIs are stable and are guaranteed to continue to exist for a very long time: CPU instruction sets, networking protocols (IP/UDP/TCP), operating systems (AFAICT: Linux (the kernel) works hard to not break user programs, and Windows is kind of known for bending over backwards to maintain compatibility with old programs), file systems, etc.

What I'm advocating for here requires that your program be compiled into a native executable binary file, and it must embed all its library dependencies (aka static linking).


Unless you're doing something mathematical, where you can exhaustively test every input and output and ship something that's essentially close to the platonic ideal of the thing, you're going to need to update.

Protocols change, programming languages change, human languages change, borders change, definitions of time change, laws change, sensors change, and on and on.

It's like asking why a person can't be just done learning so they can live their life. Well, they can, but pretty soon the world is inaccessible to them. They can't use a computer or a phone because they stopped learning in 1995 and they're utterly dependent on others to do things for them.

But I will say this, I've long been thinking that there should at least be a programming language and OS that does its best to not change. A sort of whole environment where every single piece that's out of beta commits to minimal interface changes over time. Fixing security bugs and supporting new emojis, fine. We have to do stuff like that. But everything else is just as frozen as can be. It would be useful for super long term software. If we have buildings still around from 100 years ago we should be able to build for 100 years from now without a team around for constant maintenance. Though I do not think it can ever be maintenance free.


> But I will say this, I've long been thinking that there should at least be a programming language and OS that does its best to not change. A sort of whole environment where every single piece that's out of beta commits to minimal interface changes over time

At the risk of this being interpreted as trolling; don't we have this, for crucial OS interfaces at least? Linux is famous for "we do not break user space" (yes, I know it's not 100% true, but it's closer than anything else I've seen), and afaik the posix standards are pretty stable too.

The C language also has a stable ABI, which is basically the ffi for most languages, and most static libraries that were developed decades ago can link just fine.
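
For illustration, a minimal sketch of what that stability buys you: calling straight into libc from Python over the plain C ABI (assuming a Unix-like system where ctypes can locate the C library):

    import ctypes
    import ctypes.util

    # Load the C library through the stable C ABI; the exact library name
    # and location are platform-dependent.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    libc.strlen.argtypes = [ctypes.c_char_p]
    libc.strlen.restype = ctypes.c_size_t
    print(libc.strlen(b"hello"))  # 5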

In my mind, the problem is caused by at least two things:

- Languages that are not compiled to machine code have no incentive to have a stable API or ABI. This has the effect that code reuse now necessitates replicating the state of the machine it was developed on (Which might be using a different language version, runtime, etc).

- Programming culture has progressed to a "get it done quickly, just grab a library" culture. This is not to say you should develop everything in-house, but on a spectrum, I have the hunch that this easy accumulation of dependencies induces a culture that does not vet the stability of the dependencies. Once one of your dependencies is unstable, the project on which it depends cannot be stable.


> At the risk of this being interpreted as trolling; don't we have this, for crucial OS interfaces at least?

For crucial OS interfaces, yes. For the ecosystem of libraries and packages, no. But ultimately Linux is more than just crucial interfaces. The ecosystem of applications and libraries that we need to get anything really useful done does constantly change, and it would be nice if there was an OS + programming language + culture for "forever apps" that are designed to work for centuries without a material risk of an auto-update breaking anything.

Sort of like how Rust is designed around safety, that's what would be nice. I know it wouldn't be perfect, for the reasons I listed above, but for the areas where we are at least trying to have things work for good I think it would materially help.


> The ecosystem of applications and libraries that we need to get anything really useful done

I'd argue that you can get a lot of useful things done with plain POSIX/C

> a OS + programming language + culture for "forever apps"

Isn't POSIX + C exactly that? Sure, not many people stick within those bounds, but those who do tend to care a lot about not breaking stuff.


Yeah, except POSIX and C also aren't set in stone; they also evolve, deprecate stuff, and remove features, just like everything else.

Yeah, C and Java have had non-breaking changes for years. Java code from 1995 can still compile and execute in Java 17 (2021).

I imagine COBOL is similarly stable but I have no experience there.


> I imagine COBOL is similarly stable but I have no experience there.

I haven't programmed in COBOL, but it is quite stable. One of the really nice things in COBOL is the "Environment Division", which has a Configuration Section, which provides information about the system on which the program is written and executed. It consists of two paragraphs: Source computer (the system used to compile the program) and Object computer (the system used to execute the program).

Another remarkably stable language is Ada: I have used it to compile non-trivial programs from 30+ years ago, on a different architecture than they were originally written for (granted, without system dependencies), using a modern compiler, with nothing more than (1) renaming identifiers which had since been made keywords, and (2) splitting files due to GNAT's implementation limitation regarding compilation units.


Ya I was thinking java is closer to the mark as well, even coming from someone who has never programmed in java... But I have long thought that it would be nice if there was some sort of universal standard for pseudo code that could be compiled into other languages.

Meanwhile Python: "Yeah if you could rewrite your entire codebase every few years because of some changes nobody fuckin asked for, yeah that would be great. Oh and by the way, we're dropping support for non-latest versions of the language lmao."

Except they have, just not to the same extent as some other languages.

K&R C won't compile in modern compilers, nor is it allowed per the upcoming ISO C2X standard.

gets was removed.

And if you are using optional Annexes, they might not even exist in ISO C compliant compilers.

Similarly, that Java code will die if it uses internals made private in Java 9, inherits from JDBC interfaces and has methods with names that were later added in more recent versions, uses deprecated methods that were finally removed around the Java 10 timeframe, ...


But Java 8 -> 16 doesn’t work without changes.

Completely depends. You can write Java in 2021 that works with both old and new Java, especially if you don’t use any dependencies.

Sure, and you can write code that is valid C and valid shell, but there have been major breaking changes to Java (IIRC Minecraft barely started working with Java 16 this year).

You have to distinguish Java the language and the JVM: Class files generated by any (?) version of the Java compiler before the JVM version you’re using will usually work on the newer JVM. Java 9+ started removing classes, breaking API compatibility: but, generally, there are additional jars/command-line options that can be used to restore the older behavior.

Minecraft, in particular, has always worked fine with Java 9+, ime: if you know the flags to apply (and delete the jar that checks the Java version), it more or less just worked. I’ve been running it for a year+ on new JVMs so I could use the Shenandoah GC and take advantage of more of the 128GB RAM in my desktop.


Not any longer: as of Java 17, people have had enough time to port their code, and those switches now "do nothing".

This is the last step before they get fully removed.


None of those changes are required for use, and more importantly, nothing has been DROPPED to make old code obsolete.

This isn’t exactly true: javax.xml.bind-related ClassNotFound errors are a fairly common problem when running old Java applications on new JVMs. Thankfully, the solution is just to include the relevant jar that implements the classes that have been removed.

Depends, some libraries work, some need special parameters, some don't work at all

Good luck running a binary from 2005 on a modern Linux distro.

GitHub was built to host source code though.

> a programming language and OS that does its best to not change

you've heard of this thing called Windows?


Yes. Windows is extremely backwards compatible. This could be, ironically, one of the reasons some people don't like it too much. See Raymond Chen's The Old New Thing [1] for some technical details.

[1] http://ptgmedia.pearsoncmg.com/images/9780321440303/samplech...


The latest Windows update won't install for me. It pops up an error saying "Uninstall this app now. It is not compatible with Windows 10."

Yes, it says "now," in command form. And the app, VirtualBox, works just fine in Windows 10.

Doesn't seem very backwards compatible these days.


If the program manifest doesn't advertise compatibility with the current version of the system, its window will be scaled.

Since when does Windows break most programs every few months? Having to write a patch every few years when Microsoft releases a new OS is pretty low maintenance. And most of the time Windows is backwards compatible nowadays, so usually you don't even need to do that.

Most 10-year-old games still run on modern Windows even without patches.


> The C language also has a stable ABI, which is basically the ffi for most languages,

C is honestly terrible to target/use as an FFI; doing so precludes doing things correctly, or more advanced things like... say, arrays that "know their own length" or numeric types that are range-constrained.


>say arrays that "know their own length"

There's a fair amount of C code that does just that, and has done so for a long time.

>or numeric-types that are range-constrained.

Those don't need astral and can be passed through C ABI just fine.


>>say arrays that "know their own length"

>

> There's a fair amount of C code that does just that and does it for a long time.

No, there isn't.

There can't be because of how arrays in C degenerate into pointers/addresses; see Walter Bright's "C's Biggest Mistake" -- here: https://www.digitalmars.com/articles/C-biggest-mistake.html

>> or numeric-types that are range-constrained.

> Those don't need astral and can be passed through C ABI just fine.

No, they can't.

If you're passing a "Positive" through C's ABI you lose the fact that the value can be neither negative, nor zero. (Unless you mean "passed through" as in, "not mangled", but this is setting the bar so low as to be laughable.)
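
To make the point concrete, a small sketch (Python's ctypes is used here only as a window onto the C ABI; nothing about it is specific to Python): libc's qsort() has to be handed the element count and element size as separate arguments, because the pointer it receives carries no length of its own.

    import ctypes
    import ctypes.util

    # Assumes a Unix-like system where the C library can be located.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))

    # int (*compar)(const void *, const void *) -- specialized to int here.
    CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                               ctypes.POINTER(ctypes.c_int),
                               ctypes.POINTER(ctypes.c_int))

    def compare(a, b):
        return a[0] - b[0]

    values = (ctypes.c_int * 5)(5, 1, 4, 2, 3)
    # The array's length travels out-of-band: base pointer, count, size.
    libc.qsort(values, len(values), ctypes.sizeof(ctypes.c_int), CMPFUNC(compare))
    print(list(values))  # [1, 2, 3, 4, 5]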


No building stands for 100 years without maintenance. 100 years ago heating and light by coal or gas were still common, so every single old building has had upgrades to electricity (or first gas, then electric). The roof leaked, the wood rotted, a bunch of people died in a fire and now all buildings in a similar style need to add a fire escape...

There are hundreds of churches in Europe that have stood for centuries without the "maintenance" you are talking about. You are talking about upgrades, not maintenance. Adding gas and electricity are modern (< 150 years ago) things, as are earthquake codes, fire escapes, etc. I'm not suggesting people go live in cathedrals, but there's a huge hole in your reasoning.

Every old church I've ever been in has had some kind of donation drive on the notice board asking for help replacing missing tiles, rotting roof beams etc.

I don't know if the British were just exceptionally bad church builders and contemporary European churches are doing just fine but I would hazard a guess that no, they also require spectacularly large amounts of maintenance to keep the weather out too.


True, but this is preventing a kind of decay that software is, by its digital nature, immune to. Information doesn't weather or rot. There's no software equivalent to "missing tiles" or "rotting roof beams".

“Link rot” and “bit rot” are terms that have entered our common vocabulary. I think information gets weathered down by contact with humans, who are an incredible source of entropy.

Then consider NASA Voyager. It's been floating around the galaxy for 44 years. If all goes well, its onboard computer won't run out of power until October 10, 2166.

Yes, this kind of upkeep is definitely required. Replacing little things as they break and keeping an eye out for issues. Old things require this upkeep, which is a different thing from rebuilding, replacing, or upgrading. But the goal of upkeep is to keep the thing in good shape, and usually without any visible change at all, rather than transforming it into something different (assumedly better, but not really). Upkeep is preservation, conservation. We've somehow forgotten how to do that for the vast majority of things.

Hell, here in Sweden there are lots of old houses that are in dire shape BECAUSE they received "maintenance" to keep them modern, resulting in problems with mold and insects.

The parallel to breaking working code while updating it to follow modern standards isn't too far-fetched.


Both here and in code, upgrading to keep up with progress of technology is good. Upgrading to keep up with progress of business - aka. ways to provide less value for more money - is bad. Old technology was often immune to value engineering simply because people didn't know how to cut corners with it.

Ime, “upgrading to keep up with the progress of technology” is often code for “I haven’t learned the paradigm behind this legacy code base, but I know it’s bad”. I’ve spent so much time arguing against fad-driven development where people insist on rewriting year-old redux apps to use hooks because “hooks are the future”. At this point, I more or less reflexively oppose anyone who argues for a new technology just because it’s new.

>"Protocols change, programming languages change, human languages change, boarders change, definitions of time change, laws change, sensors change, and on and on."

I have a 17-year-old Windows desktop product that I've long abandoned. But it is exactly this: a single exe with everything compiled in. It uses DirectX 9 and DirectShow. It still runs fine without any tweaking. It even compiles fine from source code. So from a practical standpoint it is maintenance free.


> But I will say this, I've long been thinking that there should at least be a programming language and OS that does its best to not change.

Believe it or not this is a core design goal of the Urbit ecosystem. The core bytecode should be simple (and stable) enough that some future archaeologist can implement an interpreter in a weekend and run programs. They take it so far that their version numbers run _backwards_, with the idea that once they reach version 0 nothing will ever change again: https://urbit.org/blog/toward-a-frozen-operating-system


> But I will say this, I've long been thinking that there should at least be a programming language and OS that does its best to not change. A sort of whole environment where every single piece that's out of beta commits to minimal interface changes over time. Fixing security bugs and supporting new emojis, fine. We have to do stuff like that. But everything else is just as frozen as can be. It would be useful for super long term software. If we have buildings still around from 100 years ago we should be able to build for 100 years from now without a team around for constant maintenance. Though I do not think it can ever be maintenance free.

you can fire up QBasic on dosbox just fine


The last update to MS-DOS was over twenty years ago. I'm longing for something more than mere backwards compatibility.

So you want "there should at least be a programming language and OS that does its best to not change", and you also want updates for it?

FreeDOS is regularly updated

> Unless you're doing something mathematical, where you can exhaustively test every input and output and ship something that's essentially close to the platonic ideal of the thing, you're going to need to update.

You only need to update dependencies that interact with things that change autonomously.

For the types of programs I write, that describes very few, sometimes none, of my dependencies.

Continuous updating of dependencies is valuable in some circumstances. Not all.


> there should at least be a programming language and OS that does its best to not change. A sort of whole environment where every single piece that's out of beta commits to minimal interface changes over time

A passable implementation of Lorie's UVC concept[1] is sitting right in our lap, it's just hardly ever put to use in this way, and it's considered by some not to be very sexy—or rather, it's popular to dump on it for various reasons. But you're probably using it right now to read this comment, and the reason is that its availability and reliability is a foregone conclusion by every stakeholder in this conversation—whether that be me, you, or our hosts involving Y Combinator &co. And with respect to the article, it's a foregone conclusion that it will also be involved in the tech tree of the example projects given. Jeff even mentions its longevity explicitly—in a comparison to Python, where Python is considered the worse of the two. How about just not depending on Python at all?

If you click through to Jeff's "makerss.py" script and look at the kinds of machinations that it does, consider whether this stuff could instead be taken care of directly in a "README.html" in the repo root. No implicit[2] assumptions about what software is installed on the machine in question—since it's using the UVC after all—and none of the Python 2/Python 3 fuss. (It'd have to forego the symlinking that makerss.py does near the end (currently line 1717 and 1719), but that should arguably already be handled elsewhere, anyway.)

1. https://news.ycombinator.com/item?id=11512892

2. https://news.ycombinator.com/item?id=22991033


> consider whether this stuff could instead be taken care of directly in a "README.html" in the repo root

I'm not sure what you mean?


From <https://news.ycombinator.com/item?id=28407936>:

> If folks were really committed to improving the developer experience, then instead of what we do now [...] development would work like this: ¶1. Download the project source tree ¶2. Open README.html ¶3. Drag and drop the project source onto README.html

If this is still unclear, please let me know. (Please also let me know if it is clear. I'm very interested in getting this explanation "right", so it's easy to understand.)


Software that merely lasted as long as HTTP has would already count as incredibly "low upkeep". You don't have to use a new programming language just because it exists. Time zone info is a data file managed by the OS and has no business being baked into specific programs. Translations are likewise data files and should be modular so that you can add or replace them without invalidating the program. Sensors change much slower than software; the oldest and crustiest operating system image you'll ever meet is the one coupled to a multi-million-dollar sensor in a lab or a hospital.
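
As a small illustration of the "time zone rules are data, not code" point (assuming Python 3.9+ and an OS-provided tz database or the tzdata package; the zone name is just an example):

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # The rules for "Europe/Stockholm" come from the system tz database,
    # not from anything compiled into this program; updating tzdata changes
    # the program's behaviour without touching the program itself.
    print(datetime.now(tz=ZoneInfo("Europe/Stockholm")).isoformat())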

We perhaps need a LongNow Smartphone.

> Like, why don't we just let projects be "done"? Things don't need to be maintained and updated for eternity.

This is generally why I opt for "single-file" libraries that do one simple task well. The smaller the library, the more likely it is "done". For example, do I want some insanely complex image library that handles every file format under the sun, or do I just want some basic one that allows me to output a simple JPEG?

I often find myself referring to "single_file_libs" repository: https://github.com/nothings/single_file_libs

Looking at the open issues, it doesn't appear to be actively maintained but it's still an incredibly good resource for "completed" projects.


> Looking at the open issues, it doesn't appear to be actively maintained

I'm not sure if this is intentionally ironic or not, but it does seem like if your small libraries aren't getting updated regularly because they are done, you at least want the meta library (in this case the single file libs repository) to be updated regularly with new small libraries.


> [..] you at least want the meta library (in this case the single file libs repository) to be updated regularly with new small libraries.

It's the meta-library collection repository I was talking about, not the single-file libraries.


Funny you chose to say JPEG, because JPEG XL is just on its way to getting big. If you're dealing with a lot of images, the savings are probably big enough to warrant an update.

There really is nothing that can be considered completely stable.


>People think if a project on github hasn't been updated in 3 months then it has been abandoned!!

People are too quick to assume that all Github repos are meant for public consumption. Can only speak for myself really but most of what I've pushed there was designed with only me in mind. If others find it useful that's nice but don't expect me to accept pull requests.


Have you considered a "no maintenance intended" badge on those repos?

https://unmaintained.tech/


This is not at all close to what I'm advocating for.

The page says:

> Open Source is rewarding- but it can also be exhausting.

> The linking project’s code is provided as-is, and is not actively maintained.

Basically the attitude here is "I'm tired! Leave me alone!".

The attitude I'm advocating for is "This is feature complete! We only update this project for bug fixes or security patches."


I am suddenly reminded of this earlier discussion:

https://news.ycombinator.com/item?id=28263721


This is precisely why I have a private git repo.

It would be sad if people could only choose between "Public and I commit to maintain this for all of you" and "Private and it's just for me". What's wrong with "This is public but it's still for me; you're welcome to use it if you want to, but don't expect anything from me"?

This is what open source is really about for me.


I agree with this so much that I strongly feel it should actually be a repo option, as it clearly lays out the responsibilities of all parties should they choose to use the code.

It's almost like we should have some sort of text we can all copy paste in our repositories to indicate this sort of thing.

Mine will always contain: THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ...

And then maybe we should add something about that people can actually use it if they want to: Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files ...

Should try to start this as an effort for real!


I usually just put some text to the extent of "this code was written by myself for myself and I'm really not looking for pull requests" in the README.

If you get tired of repeating yourself, do what I wrote about in https://news.ycombinator.com/item?id=28663996 and just use the MIT license. It has all of this outlined explicitly already.

I see what you mean, but in my experience most people will at least skim the README while they ignore the license entirely. So, by putting it in a clearly visible position in the README (say just below the repo title) I catch a few extra people and prevent them from sending in issues/PRs in the first place.

It's also not like it's a lot of work; a 30-second one-time investment of my time is less than I spend on most HN comments.


Totally agree, open source is about sharing code, designs, solutions to problems, etc., not running, maintaining, and promoting a project.

Some open source efforts are geared towards running, maintaining, and promoting a project.

Adoptopenjdk comes to mind. https://adoptopenjdk.net/


This seems like a volunteer project _around_ open source rather than open source. It may seem pedantic but it feels useful (to me at least) to separate the act of sharing and collaboratively improving code from the projects and politics that often get built around that.

This is one reason I really like Common Lisp and Clojure: both communities have developed a mindset that allows libraries to be done and, as a result, I notice a lot less churn than in other languages.

I also like them for the same reason. The languages themselves evolve slowly (Clojure) or not at all (Common Lisp). A library that emphasizes simplicity and does One Thing Well can be stable and continue to be useful indefinitely. There are network effects too -- if my library uses stable libraries and is built on a stable language, it need not change at all.

On the other hand, there are the big libraries that, for convenience, pull one or two small things from several others -- their surface area becomes so large they are practically guaranteed to generate more work in terms of security updates, dependency conflicts, and the like. I experience this at times developing in Clojure, whose hosted nature means issues can emerge from the Java ecosystem, which is obviously enormous and dynamic.

We recently had a defect at work relating to a conflict between transitive (Java) dependencies from a third party library used to serialize exceptions prior to transmission to their service, and another tool used only for generating "literate"-style code documentation. While not a production defect, it cost us some time and could have been avoided (both by us and the libraries' authors) by more hard-headed evaluation of the tradeoffs involved in introducing certain dependencies in the first place.


I would use Occam's razor with this idea and posit that it is much more likely that it's a lack of overall mindshare with functional programming than the nature of language abstractions.

Don't tell me that if Lisp ever achieved cool-kid status, a thousand different libs wouldn't pop up basically overnight.


The thing about allowing libraries to be done is a bit more nuanced than not having lots of different libraries doing the same thing. There are lots of competing libraries even now, and existing libraries do get surpassed in popularity. But they keep working and don't become unusable or problematic as often as in other cultures.

I don't think this is entirely right: there are plenty of languages with similar mindshare that don't have this. Just building a Haskell project, for example, can be a nightmare if it hasn't been maintained. Scala also has this issue, despite having a similar market share to Clojure.

If the premise is that you limit your dependencies to have low-maintenance code, you'd probably not include many (any?) of said thousands of libs. Also, it isn't about the number of libs, but how often they change.

But what doesn't change?

Network protocols? No, HTTP/3 is here.

OS? Kernel, sure, but everything else no. E.g. most Linux OSs recently changed their entire init system.

CPU architectures? ARM was irrelevant a few years ago. Faster CPU instructions are arriving every year. You say, "okay, well just recompile." Which brings us to....

Build tools? These are a nightmare to make stable in the best of circumstances.

It's all fiction. Code rot is real.


http is not a networking protocol in the sense I meant. It's an application level protocol. Although you can still write a server that implements only http1.1 and it should still work fine.

A lot of the things that you describe as changing don't really need to change.

Operating Systems? Windows XP was probably the best windows, and it's from nearly 20 years ago. Looking at Windows 10, what exactly does it bring to the table? It's a much worse experience overall.

Compilers don't need to change either.


How often are changes in your web app directly warranted by a new init system or HTTP protocol? Does your code directly depend on what init system is running it, or somehow become unusable if it doesn't understand the latest HTTP version?

If yes, just what the hell are you making?


Yes to the first question. My System V-based web server doesn't work in systemd. (Well, the web server itself probably works, but the packaging needs to change.)

As for the second, code can still work and yet be obsolete.


I have a side project that I haven't updated in years. I haven't needed to because Windows is fantastic at backwards compatibility.

Also the website automatically handles payments and sending license keys.

The only work I do is when PayPal integration breaks because they make some minor change on their end.

Unfortunately, Windows 11 removed the IDeskBand interface I rely on so I might actually have to do some work.


Curious, do you limit yourself to the C Win32 API on Windows? I hear that that low a level never really changes, and you can do all the stuff the newer Windows GUI libs can do.

The answer is in Lehman's laws of software evolution. Most systems "must be continually adapted or [they] become progressively less satisfactory".

The argument is simple. The world changes all the time and most software interacts with the world. Therefore, software needs maintenance in order to keep up with the changing world.


Yeah, but the rate of that change makes all the difference.

Consider biological systems: they're constantly evolving, constantly in flux - and yet, all living things we can see with the naked eye are fixed in the scope of a human lifespan. Most of those don't show meaningful changes within the lifetime of a human society. Cooking recipes of our great-grandparents are still valid today, because fruits and vegetables and meat didn't change that much[0] over the last century. Wood still works the same way it did 1000 years ago.

Being able to change much faster is good - as long as the changes tend to lead in useful direction. Software seems to be changing way faster than it's needed, because it's changing for the wrong reasons - instead of improving the value it provides, it's usually about one-upping competition, or forcing recurring payments, or forcing competitors to waste money (e.g. by introducing incompatibility on purpose). Software industry resembles a rain forest, in terms of both diversity and frequency of senseless murder (er, "survival of the fittest"). But as humans, we've dominated this planet by being smarter than that, by being more efficient than natural evolution.

--

[0] - Despite being optimized directly by our industrial processes.


Software close to the user has this need for adaptation.

However, my experience is that the building blocks for making even large software systems are increasingly more stable and Debian GNU/Linux is a prime example of this.


That's a nice, succinct way to put it. Software that doesn't interact with the world can be finished. Software that does ends when the world does.

I suppose then that a benefit of pure functional programming is that you can statically analyze whether a given piece of software is plausibly "done" or simply unmaintained, depending on whether and to what extent it uses monads.


> People think if a project on github hasn't been updated in 3 months then it has been abandoned!!

This is a heuristic that generally works. Most projects that don't have recent commits really are abandoned. And when you open a project on github, one of the first things you see is when every file was last updated. Maybe github should offer other heuristics, or if it already has them, display them more prominently.


I don't necessarily disagree, but this is the cultural problem I'm sort of pointing the finger at.

Packaging your software into a container-image accomplishes almost exactly the same thing, without the need to be using a stack that can create standalone static binaries.

I would point out, though, that even when you can just continue to use a single pinned version of a Docker image forever (because the program itself is stable, or is part of a stable system that only uses the program in precise ways), people still value regular releases, and see non-upgrading images as “rotting” — because they can contain vendored deps, and those deps can have security vulnerabilities discovered in them over time.


Docker is especially bad for this. A jar file that runs today will probably run fine in 10 years on future Java. The container might, but it almost certainly has a huge number of security vulnerabilities, and maybe no one uses AMD64 and ARM anymore.

I haven't worked a lot with Docker or similar containerization tools, but my impression is they require you to have an internet connection to download stuff.

At least with Docker, when you write or download a docker file for the first time, it's just a description of stuff that needs to be downloaded and maybe commands that need to be executed.

This kind of thing is qualitatively much worse than a statically linked executable binary in that respect.

> a stack that can create standalone static binaries.

Yea, a language that has the notion of a compiler is a MUST if you actually care about this kind of thing.


they're specifically talking about the resulting image built using whatever dockerfile you had. they can be exported to a file or pushed to repos to be reused elsewhere.

It'd be useful to have a way to tell visitors this project still works without having to push commits to refresh the last-updated timestamp.

I do this in the README. Include a "state of the project" section.

Seeing no updates in 3 months is one thing. Seeing no updates on a project that describes itself as a work in progress is another thing.


Isn't this why we have CI/CD and scheduled pipelines?

Seems like a waste if it's just doing the same work all over again just to update a timestamp. A waste that someone has to pay for.

You are right; it's because the "modern" CI/CD is ill-designed, thanks in part to being "generalized" to handle text and "generalized" tools like C compilers and makefiles... instead of, you know, working in the actual problem-space.

See this: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.26....


> Isn't this why we have CI/CD and scheduled pipelines?

I mean, kind-of. One of the problems with CI/CD though is that they're poorly designed for the job they do; see this: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.26....

The TL;DR here is that by structuring your code and storing it in a hierarchically structured database you can solve CI/CD in a MUCH nicer manner: your root node's history becomes a history of the project in a compilable state.


I evaluate the liveness of a github project based on the responses to questions (latest post, time to reply). Not by when the last change was made.

"Art is never finished only abandoned"

"Klingon software is not released, it escapes"

> What I'm advocating for here requires that your program be compiled into a native executable binary file, and it must embed all its library dependencies (aka static linking).

the super example of this is justine's project

https://github.com/jart/cosmopolitan

it's pretty fucking cool

> Cosmopolitan Libc makes C a build-once run-anywhere language, like Java, except it doesn't need an interpreter or virtual machine. Instead, it reconfigures stock GCC and Clang to output a POSIX-approved polyglot format that runs natively on Linux + Mac + Windows + FreeBSD + OpenBSD + NetBSD + BIOS with the best possible performance and the tiniest footprint imaginable.


Anything that connects to or supports something that connects to the internet eventually gets used as part of some vulnerability, and hence needs to be updated.

But, I agree with your sentiment. Endless feature bloat is not necessary for the vast majority of software. The expectation that every piece of software must remain updated forever is just not sustainable.


I completely agree with you, but your last point is exactly why recent updates are a proxy for ‘well-maintained.’ Software is so complex that it _has_ to have dependencies, and even if you vendor them / compile to an executable, that doesn’t account for upstream bug or security fixes.

1) If you buy into this concept, you would naturally limit your dependencies to the bare minimum.

2) Static linking does a good job of protecting against when upstream introduces new bugs or security holes.


Static linking also protects you from security patches.

I like writing low-dependency software, but what that means is that I pick a distribution to build on and use their packages. If a library is not packaged, it has to be really good for me to depend on it.

Sure, it might break once in a blue moon depending on how stable the distribution you use is, and you'll need to refresh your stuff every 4-9 years when a new version of the base OS is released and the old one goes EOL, but most of the time the result will be extremely low-maintenance.


> limit your dependencies to the bare minimum

I get this desire, but likely it guarantees more work for yourself and probably more undiscovered bugs, because with only one set of eyes, to invert Linus’ law, many bugs may lurk deep.


Have you actually tried to reduce dependencies?

I mean, are you speaking from experience, or just theorizing?

Here's something to keep in mind: when you depend on a huge library that has 40 features, it's likely your program only needs 5 of them.

So, while the huge library benefits from "many eyes", your particular program would benefit much more from not depending on a library with a lot more features than you need.

Also I don't know if you've seen how open source works, but chances are the number of people actively looking at the source code for these libraries to uncover bugs or security holes is very small to non-existent.


You don’t need to be condescending. We all know that using dependencies saves time, and it’s not possible to build any practical software without them. An OS is a dependency. A GUI library is a dependency. A database is a dependency.

Sure, you can limit them, but you can’t live without them. That’s not theorizing, that’s the whole industry’s collective wisdom.


What part of my comment was condescending?

> That’s not theorizing, that’s the whole industry’s collective wisdom.

There's no wisdom there. The industry is too young to have any meaningful wisdom to speak of. It's mostly fashion driven.

> An OS is a dependency. A GUI library is a dependency. A database is a dependency.

Well, I did make a distinction in my original post about stable dependencies, and I included the OS as a stable dependency.

I also never said you should have "zero" dependencies. My point is about minimizing dependencies as much as you can.


This was the condescending part:

> Have you actually tried to reduce dependencies?

> I mean, are you speaking from experience, or just theorizing?


Why is it condescending?

I asked this question because your comment does not jibe with my experience. So before countering your point I wanted to confirm whether you had some experience that was opposite to mine or not.

Where is the condescension? In my mind I'm being polite and not making assumptions.

If I wanted to be condescending I would tell you that you have no idea what you're talking about.


First, you weren’t condescending to me, you were condescending to someone else. Second, saying “you have no idea what you’re talking about” is aggressive, not condescending.

“Have you actually tried X?” is condescending, because it places you in a position of superiority, framed as an “honest” question. If in your mind that is a polite and honest conversation, your mind is clouded, as me and other people have now let you know.


TBH, opening your post with "Have you tried X?" _does_ read like a very short step from "you have no idea what you're talking about", whether you meant it like that or not.

But you can choose dependencies which have been stable for years, like OS calls and libraries with a long, stable track record. Those are good enough to be productive.

If you're sincerely interested in a thoughtful discussion, you should dial back the smug tone in your posts.

Wow, so you ask me for credentials, accuse me of theorizing, and then by the time you finally get around to addressing my point, you hypocritically theorize with “it’s likely” and “chances are”! Well done, I’m not wasting my time on you just like I wouldn’t by rolling my own logging library for the umpteenth time. Good luck out there.

I didn't ask you for credentials nor accuse you of anything. It was an honest question.

I disagree. How many times does software use 3rd party dependencies when the standard library would have sufficed with a bit of extra effort? With python, golang, and others the standard library should cover most cases.
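
For instance, a minimal sketch of a dependency-free HTTP fetch using only the Python standard library (the URL is hypothetical):

    import json
    import urllib.request

    # Hypothetical endpoint; any JSON-returning URL works the same way.
    url = "https://example.com/data.json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    print(data)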

Go is pretty darn good at avoiding breaking changes to the standard library. But, um, do you remember Python 3? You just proved my point with that one, in your own words.

> Like, why don't we just let projects be "done"? Things don't need to be maintained and updated for eternity.

I agree with this but

> native executable binary file

This is hard because new architectures, instructions, optimizations show up every few years.


> Like, why don't we just let projects be "done"? Things don't need to be maintained and updated for eternity.

As the Buddha taught, desires are numberless. I don't think I've ever even heard of a piece of software that hit 1.0 and people were like, "Welp, it's perfect, never change a thing." I've certainly never worked on one. Indeed, the biggest sign of project success to me is releasing an initial version and people being shocked, shocked at the obvious features I missed. And I love it, as that's proof that they're really trying to use it for their real-world needs.


"In my mind, the best software engineering is where you solve a problem once and your solution just works and needs no configuration or maintenance or updates."

https://cr.yp.to/software.html


> and Windows is kind of known for bending over backwards to maintain compatibility with old programs

Maybe they used to, but good luck running anything made for Windows XP on a current version of Windows. You will have better luck running it with Wine on Linux.


I have an application I developed for a client starting in 2005 using Delphi 7; my VM is based on Windows XP, and the app worked fine on Windows 7 and is now used on Windows 10. I had to update one setting in an external configuration file for slight differences in the ADO driver, but other than that I haven't rebuilt the executable for years.

I was genuinely pleased for the client as they are nice people running an essential service on a budget that is always under threat, and this result means not needing to pay for a silly 'fresh' version of the application.


How many security features inherent to 64bit processes are they missing? Or is that not relevant for their deployment?

No, not relevant in this case.

> Yes, to a certain degree it's impossible to design software that does not exist as a part of an ecosystem, which is why I put it in quotes.

I’m doing a big embedded project and all the code is self contained! The only inputs are physical user inputs.


Well, bugs do get found, and continual new demands from humans make more optimizations necessary. SemVer is the compromise between stability and continual improvement, and I think it works pretty well, all considered.

"Evergreen" might be a word we can use here.

Can you elaborate about this mindset in the context of vulnerabilities and bugs?

I've been at my current company for maybe 6 years now, in an engineering role.

In that time, the service-oriented code which has significantly outlasted anything else, with nearly no maintenance, has maybe surprisingly or unsurprisingly been: AWS Lambda functions.

We have quite a few Lambda functions (written in Node) that I believe pre-date even my tenure at the company; the only modifications in the interim have been a few lines of obscure bug fixes, updating the Lambda Node version, switching from JS to TS everywhere a few years ago, and keeping the CI updated on those repos (which is a rather ironic and wasteful investment of resources given they've only changed a half dozen times in something like six years).

Checking in on those dozens of functions handling millions of invocations every month, and seeing them just chug along with no maintenance, has convinced me, personally, that if a problem domain can be solved in a FaaS-like way, it's the way to do it. Not every problem can, and I'm not specifically advocating for Lambda vs Azure FaaS vs an open source kubernetes-based solution.

More that every deployment artifact has to draw an "operations" line: to the left of that line is the rote, daily operational banality, and to the right is the ever-changing business problem space. The further right you can draw that line (pushing as much as possible into operations), the better; it gives you optionality to outsource that left part, maybe to automated software, maybe to AWS, maybe to an in-house operations team. FaaS draws that line roughly as far right as I think a general purpose programming system can; further right would approach things like Excel, Retool, etc.
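
To make the "operations line" concrete, here is a minimal, hypothetical handler (the functions discussed above are Node; this is the same shape sketched in Python): everything operational, like scaling, patching the runtime, and shipping logs, sits on the other side of the line.

    import json

    def handler(event, context):
        # 'event' is whatever the trigger delivers (API Gateway, SQS, etc.);
        # the function body is pure business logic.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"hello {name}"}),
        }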


I think the answer lies in your last paragraph, but your comment made me think: Perhaps Lambda functions are resilient because they're:

- scope constrained

- surrounded by APIs only (e.g. no cheating by having a whole POSIX system)

- infrastructure is maintained solely by a giant company that you pay to not have to think about it

I haven't used any "serverless" stuff, so I don't really know.

Per the article's recommendation, I have noticed at work that resilient software is often a 400 line python script with no other dependencies; or, a medium sized piece of C++ infrastructure that depends only on the OS, the compiler, and a hand-written build system.


> hand-written build system.

Please don't. It will be fine as long as a project is very small, but gets out of hand quickly. Rewriting the build system after the fact will not be budgeted and will likely never happen.


AWS Lambdas are the bane of my existence. Many of the issues I've been involved with over the years* stem from performance, deployments, cold starts, API gateway integrations, arbitrary limits, or just random failures. Debugging is difficult and the whole thing is dog slow. Even today I have two outstanding issues that I need to investigate: Lambdas failing to connect to Redis, and pino not logging response times correctly.

Some of that can be blamed on misusing Lambdas, but I hate the FaaS concept. AWS ECS, Google App Engine, and k8s, where you deploy a container, are IMO better and give you more control.

Edit: not hundreds, but probably close to 70 in total.


they can be quite usable with something like the Serverless framework.

really though, in my opinion that should be a part of the core AWS solution. Lambda itself is otherwise "low level".


Serverless framework is even worse. Debugging anything in the Serverless framework is hell. There is no way to print a fully generated CloudFormation template. In many cases, you need to brute-force the solution by committing and re-deploying.

The name "serverless framework" is idiotic and not googleable. It is like naming a server-side framework "api framework".

In my opinion, Lambda should only be for small, event based functionality between services. People that sold the idea of full-fledged app development to clueless architecture astronauts should have a special place in hell.


> There is no way to print a fully generated CloudFormation template.

I'm no fan of Serverless Framework either, but accessing the generated CloudFormation template which will be used to deploy the AWS resources is actually pretty straightforward:

    sls package
    cat .serverless/cloudformation-template-update-stack.json

A big problem with Serverless in this context is that then you're on Serverless, which seems to be declining faster than Lambda now. Same for adding other abstractions on top; they're the flavour du jour.

I agree Lambda is too low level, but it probably also explains its longevity.


I've been moving more expensive operations into PostgreSQL functions. Initially, writing them has taken a good chunk of time, but they just kind of do the job, much faster too, and with low to zero maintenance so far at least.

I don't think I can only credit postgresql functions with the low maintenance though. I suspect that the additional overhead / time to write them has forced me to really make sure they are written correctly (and yet I'm sure there's a lot of optimization I could still do).


Things that get rarely touched but depend on external systems are some of the best candidates for extensive and continuous test suites - as they’re most likely to silently break.

> I've been at my current company for maybe 6 years now, in an engineering role. In that time, the service-oriented code which has [lasted the longest is] AWS lambda functions. We have quite a few Lambda functions (written in Node) that I believe pre-date even my tenure at the company.

Interesting, AWS lambda only launched in November, 2014[0]. So those more-than-six-year-old functions must have been written for a brand new platform.

[0]:https://en.m.wikipedia.org/wiki/AWS_Lambda


The "essentially no dependencies" thing always fascinates me.

Certainly, complex C code is a risk across OS versions but ... perl 5.8.1 was released just over 18 years ago and after multiple years of annual major releases of perl we're now on version 34. Many of my chosen cpan dependencies produce the exact same output on a 5.8.1 install as they do on a 5.34.0 install, and I can move them to a new machine/OS via rsync or tar of the local::lib structure (a virtualenv-like thing configured by setting a couple of environment variables) and it Just Works.

Am I weird here? Is the author? Is python unusually bad? Is perl unusually good? This all seems quite strange to me.


perl is unusually good in this regard, as I'm sure you already knew.

I do suspect that, but my fingerprints are on enough of cpan these days that I would worry about bias if I simply asserted it.

I guess it’s the language maintainers who decide if they want to keep backwards compatibility. And you can’t even say that’s due to language age, necessarily. PHP 8 (Nov 2020) had many breaking changes over past PHP versions and that language is from the mid 1990s. On the other hand, Java also dates to the mid 1990s and code written for Java2 still compiles and executes in Java 17 (2021).

Java is not my cup of anything but I do absolutely respect that about it.

I have a long standing semi-shitpost opinion that if faced with a choice of technologies, it's often best to pick the one that's been "declared dead/dying" because that means it's stable and in wide use.


Maybe some code does, but I've run into JARs that I've had to grab an older version of the JRE to run.

I am with you here. That being said, I believe the author refers to "weaker" libraries (higher-level, more recent, etc.)

I have noticed the same thing regarding Python and Perl. It's sometimes difficult to get the same program to even run on adjacent minor versions of Python. There are multiple xkcds about sorting out your Python environment: https://xkcd.com/1987/ https://xkcd.com/2510/ (see title text)

Meanwhile, I have Perl programs deployed that have been running with no changes for over 15 years.


Slight alternative: let smart people who design rock-solid software handle serialization for you; use SQLite to get the benefits of files-on-disk and portability, and to make it easier if you use different datatypes in your scripts.

I think SQLite fits the bill described in the post. It's included in Python's standard library, stores data on the file system, and doesn't require an external process.
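
A minimal sketch of what that looks like, using only the standard library (assuming a CPython build that includes the sqlite3 module, which is the usual case):

    import sqlite3

    con = sqlite3.connect("notes.db")  # a single ordinary file on disk
    con.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
    con.execute("INSERT INTO notes (body) VALUES (?)", ("low upkeep",))
    con.commit()
    print(con.execute("SELECT id, body FROM notes").fetchall())
    con.close()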

But it IS an additional dependency.

If you have cPython, you have sqlite.

How is that an additional dependency?


Not all distributions of CPython 3 come with sqlite, as it's an optional feature that is only included when the sqlite library is found at compile time.

The official installers for Windows and Linux, the app stores, and the Red Hat and Debian repos all include it.

At this point, this is nitpicking.

Tkinter I would understand, but not sqlite. Even django doesn't work without it.


The most common scenario in which you may be without the sqlite package is when you build Python from source, which is fairly common nowadays thanks to tools like pyenv.

Also the FreeBSD Python package doesn't come with sqlite by default.


FreeBSD and compiling from source are anything but common. Probably 1 coder in 100,000.

If you do that, you are assumed to have removed the safety belt and to know what you are doing.

I know we are on HN, but you have to remember half the Python coders are not even devs. They are geographers, mathematicians, biologists, students, teachers, data analysts. And of course they are on Mac or Windows.

Even I, and I started to code in Python 2.4, have never had to compile Python. I do it for fun from time to time, but I don't use that professionally.

If you use a platform where sqlite is not there, you are an exception in a niche of a minority.

I drive my car assuming there will be brakes on it. They can be missing, but it's not a great assumption.


pyenv as he mentioned is a common tool. That it happens to compile under the hood is an implementation detail that users may not even be aware of.

> pyenv as he mentioned is a common tool.

Pyenv is not a "common tool". It's used by a very small minority of devs since:

- it's used only by devs, and half of the python coders are not devs

- it works only on unix. Most people install python on windows (https://discuss.python.org/t/python-download-stats-for-may-2..., and that doesn't take into consideration anaconda or the app store). Unless you count the obscure fork used by even fewer people.

- And yet on mac, if they don't use the official installer, they mostly use brew, not pyenv. On Linux, the official repos or things like deadsnakes or EPEL. And of course docker. Pyenv is a tiny fraction of that, because any alternative solution is better known and easier.

- hard to use because it compiles things, fails easily in non-obvious ways, and assumes you manage the PATH correctly, so it's really, really only popular with people who want to deal with that. In fact the setup instructions are huge: https://github.com/pyenv/pyenv#how-it-works

- that's just what we see in the field. I go to a lot of different shops every year because I'm a freelancer, and I've encountered pyenv twice in 10 years.

> That it happens to compile under the hood is an implementation detail that users may not even be aware of.

Of course not. The setup docs themselves tell you to "Install Python build dependencies before attempting to install a new Python version".

So not only do you have to be aware of it, but you must be capable of setting up a compilation environment, and if you fail to do so, it will not work.

pyenv is not standard python. It's not "a common tool". It's a tool for a niche of specialists with domain specific knowledge.


> Of course not. The setup docs themselves tell you to "Install Python build dependencies before attempting to install a new Python version".

Yeah, well, that just means mindlessly copy-pasting a command into your terminal before moving on to the next step in the installation instructions. But now that I read what the command does, it turns out it does install sqlite, so yeah...


I built Python 3.9.x on CentOS 7 (yes, for Django) and came across at least two issues with SQLite3 support. It's non-trivial.

Built is the keyword here.

Then you might as well equivalently argue that MongoDB is a reliable basis for LTS software because every distro packages it and there are Python drivers for it.

The initial argument implied that SQLite was totally embedded in Python itself, which is not quite true. It still depends on the availability of separately-maintained & compatible SQLite libraries.


Yes, I was surprised, because I thought that was what people above were saying.

This really reminds me of an article that I read a while ago, called "SQLite as an application file format", which seemed to express some ideas like that: https://www.sqlite.org/draft/appfileformat.html

Related to this -- many popular packages churn through changes for change's sake. For example, LTS on Symfony is just 3 years, and on Doctrine a single contributor is renaming a bunch of standard PHP/DBAL functions for no serious reason other than feels. It would be nice if LTS was more like a 10 year commitment but I know that's not fun for developers, and it is a serious drawback to a lot of open source projects. The frameworks have to change because PHP is changing, and then you get a stack that's broken and needs multiple upgrades and major refactoring.

No one wants to commit to 10 years because of income concerns. Software developers tend not to be business folks. I'd give a 10-year commitment on a project that was paying me and required minimal maintenance.

> It would be nice if LTS was more like a 10 year commitment but I know that's not fun for developers

I'm surprised that some developers think constant breaking changes are "fun".


Maybe in an "if it hurts, do it more often" sense.

This is a pet peeve of mine too. It helps to have a small public API, to encapsulate the parts we don't need to expose, and to just wait until the API is stable before going to v1.

The most important things I've learned are that the system has to be (1) self-maintaining (e.g. security updates, log rotation, alerting monitors, so it's safe to ignore), and (2) super easy to make and deploy a change to. The latter consists of a README that works word-for-word so you don't have to think. Minimizing steps with push-to-github auto-deploy is great, as is an easy setup for hot-reloading local dev.

Following those two points, I've been able to use whatever languages/tools tickle my interest with little impact on maintenance effort. Dependencies are few, but not minimal, and pinned to exact versions. FYI, these have run the gamut of Vue/TypeScript, Node, Go, Java/Kotlin, Clojure, F#, and PHP, typically with MySQL (sometimes multi-master).


> (2) super easy to make and deploy a change to. The latter consists of a README that works word-for-word so you don't have to think. Minimizing steps with push-to-github auto-deploy is great

Great advice. Minimise the energy barrier to being able to ship a patch or a new feature after neglecting the service for months/years and forgetting how you set everything up.

I always have a bit of a laugh after patching my hobby project and doing a handful of iterative automated deploys while making small changes, then switching back to the day job where aspiring to do a single deploy will typically require hours of meetings and general bureaucratic overhead.

Another possible tip is: understand your deployment environment and learn how to take advantage of what it offers out of the box without adding additional layers or dependencies. E.g. I deploy a hobby webapp to my own debian server. Debian comes with apt for package management and systemd for managing services, so I use both of those (yes, even apt for python package dependencies).


Totally agree. Systemd .service unit files are super simple to understand and set up. Well worth the small time investment.
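For reference, a unit for a small web app can be as short as this (the name and paths below are placeholders, not anything from the article):

    # /etc/systemd/system/myapp.service  (hypothetical example)
    [Unit]
    Description=My hobby webapp
    After=network.target

    [Service]
    ExecStart=/usr/bin/python3 /srv/myapp/app.py
    Restart=on-failure
    User=myapp

    [Install]
    WantedBy=multi-user.target

Then "systemctl enable --now myapp" starts it and keeps it starting across reboots.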

I find that my old JavaScript projects and tools such as Jsbin break whenever dependencies are upgraded or I update to the next version.

My Python code without dependencies just works. My Ruby code breaks all the time. My Java code never breaks, but then my Java code has very few dependencies.

I wish there were a way to abstract the mechanism behind an interface, as a promise that other people can implement and evolve, and to outsource decision making to someone else who has more skill or nuance, in a way that is looser and more reliable than a particular API.

APIs want to evolve over time, but a promise to fulfil a high-level goal is different. An API is a mechanism, like a particular bottle, whereas a bottle mould captures the essence of a bottle but is not a bottle itself. Interfaces (such as in Java) are just APIs themselves, so they don't really solve the problem. For example, I want to specify routes that correspond to the URLs of my React app, or I want to encrypt data with the most secure algorithm. These two example mechanisms evolve all the time. I kind of want to abstract the intent and input. An API doesn't have to be a function call; it can be a data structure. The mechanism doesn't matter to me, but I want the mechanism to adapt with evolution. How do you decouple the method/mechanism from the goal? Evolution in nature tries to find a variable to mutate that produces better survivability. Like higher body temperature in cats, which must have had an evolutionary advantage for the mutation to have survived.


I definitely agree with OP on this and do my best to minimize the maintenance burden for most of my projects. This has worked out pretty well, in that almost everything I've published[0] can chug along in the background for months or years without me touching it at all (some of them earn a few hundred dollars per month[1]).

>On the server I run Ubuntu LTS until it's near the end of its supported lifetime. Every two years I run do-release-upgrade and move to the next one. This cost is shared over many jefftk.com projects, so it isn't too bad.

I was surprised by this part because running your own server seems like a big maintenance burden. I'd expect that you have to regularly upgrade your packages to protect yourself from security vulnerabilities.

I've avoided this by using PaaS solutions like AppEngine or static hosting + cloud functions. The closest thing I have to a VPS is publishing Docker containers, but that's basically my last resort.

[0] https://mtlynch.io/projects/

[1] https://mtlynch.io/retrospectives/2021/09/#legacy-projects


Application of security patches can be automated, and depending on your application’s dependencies, can be done without worry. I run about 10 bare metal transcoding servers, and it’s zero hassle.

Can you share more about your setup? Is it cronjobs to just apt-get update && apt-get upgrade -y? How do you handle packages that require reboots?

Not OP. My servers get this:

    $ apt install unattended-upgrades

    # /etc/apt/apt.conf.d/99unattended-upgrades-custom
    Unattended-Upgrade::Sender "Root at servername.domain.tld <servername.domain.tld@servicesdomain.tld>";
    Unattended-Upgrade::Mail "services@servicesdomain.tld";
    Unattended-Upgrade::MailReport "on-change";
    Unattended-Upgrade::Automatic-Reboot "true";
    Unattended-Upgrade::Automatic-Reboot-Time "05:00";
    
    $ sudo systemctl edit apt-daily.timer
    # Opens a new file /etc/systemd/system/apt-daily.timer.d/override.conf, paste this content:
    
    [Timer]
    # Reset the system calendar config first
    OnCalendar=
    # Set a new calendar timer with a 30 minute randomized delay
    OnCalendar=*-*-* 03:00
    RandomizedDelaySec=30m
    
    $ sudo systemctl edit apt-daily-upgrade.timer
    # Opens a new file /etc/systemd/system/apt-daily-upgrade.timer.d/override.conf, paste this content:
    
    [Timer]
    # Reset the system calendar config first
    OnCalendar=
    # Set a new calendar timer with a 30 minute randomized delay
    OnCalendar=*-*-* 04:00
    RandomizedDelaySec=30m
.. and are auto-updated. When a reboot is needed after an update, it gets rebooted automatically.

In my experience precious few packages on the Ubuntu LTS releases (I'm sure it applies to other distros too) require reboots. It's basically only been kernel upgrades.

There are few user-space packages I can imagine even benefiting from a reboot. Restarting a long-running daemon is pretty painless. Also, on a warm running system you've got an active page cache, so all the unchanged files that a service wants to open are likely cached in RAM rather than read from disk.

While I appreciate the flexibility low effort VMs provide I think they've negatively affected the views of Linux users. Long uptime is fine and restarting a whole machine is rarely necessary. At the same time modern systems with crazy fast SSDs make a reboot take little longer than just restarting a service.


There's honestly very little reason not to run VMs nowadays. It's like using LVM for disks on Linux; virtually zero overhead and you'll be happy you went with it when you do need the flexibility it provides. I'd pretty much run bare metal only if every server is used for a singular purpose that uses 100% of the available resources.

Lots of hardware servers spend several minutes initializing themselves on boot. A RHEL 8 VM on a reasonably powerful host reboots in less than 5 seconds; a service restart causes approximately the same downtime that a reboot does.


I haven’t had a reboot required in over a year, I think. The service is a Go program, and the only OS-managed dependency is ffmpeg. If a reboot interrupts the transcoding jobs, they just retry, so it’s really zero maintenance.

If you're running Debian/Ubuntu then you should look at the "unattended-upgrades" package.

Cool, thanks for the tip! I hadn't seen that package before.

The first backend on which I achieved self-hosting of Virgil was the JVM backend. I targeted version 5 classfiles. These classfiles are still accepted by JVMs, but there have since been several significant additions to the format. In particular, classfiles are now required to have stackmaps, which are pretty much a major distraction for me to add to my backend (so I haven't!).

The second backend that I bootstrapped on was x86-darwin. That worked great for about 7 years, until Apple decided to deprecate first 32-bit, and now, I guess, x86 altogether. Thankfully I ported to x86-linux soon after, as that syscall ABI is solid as a rock.

But you know, things keep changing. To keep up, I finally had to bite the bullet and write an x86-64-linux backend. And I guess at some point now arm64, if I ever want to work natively on a Mac again.

It's an endless cycle. Thankfully emulators are pretty good now, but I am still struggling with getting QEMU to run fast on macOS on the M1, so it's just more futzing with enormous emulation stacks just to run old stuff.


I follow a lot of these practices myself for hobby code, but if I had to offer one change, it would be to use SQLite instead of the filesystem for data. It does add some complexity in languages like Go that don't "easily" link with C libraries, and you do need to remember the table schemas that you're using, but for me the benefits are worth it. I never worry about locking files, naming some key badly, or any of the other issues that come with flat file dbs.

SQLite is an absolute powerhouse. Filesystem interop is the best way for it to start proving itself in any codebase. Loading assets for a AAA game straight from disk? You would have a really hard time doing better than SQLite - especially if you have millions+ of unique items to contend with. In many cases, SQLite is faster than the actual file system it resides upon [0].

We use SQLite for nearly 100% of business object persistence in our product today. The performance & scalability arguments fall on some very deaf ears after years of experience optimizing and succeeding with this stack in production.

  [0] https://www.sqlite.org/fasterthanfs.html

I've been thinking about a bare metal Forth system, and one of my goals for it would be that anyone could recreate it without any specific dependencies to build it. Something fully explained like jonesforth, but without the Linux and assembler dependency. Part of my motivation is that when I try to learn bare metal programming, the tutorials are rapidly outdated because of changing dependencies/tools, broken links, outdated information, etc. Maybe just some hex codes of the machine language, which you could convert to a binary with xxd, or, failing that, with any programming language that can write a binary file, or a hex editor. The ultimate step would be that the system could compile the next version of itself. (Self-hosted Forth implementations are really common; you can find a bunch by googling that.)
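For what it's worth, the "any programming language that can write a binary file" step really is tiny. A sketch in Python, assuming a plain text file of whitespace-separated hex byte values (file names made up):

    # Turn a whitespace-separated hex dump (e.g. "31 C0 FE C0") into a raw binary image.
    with open("forth.hex") as src:
        data = bytes(int(tok, 16) for tok in src.read().split())

    with open("forth.img", "wb") as out:
        out.write(data)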

Look into CollapseOS if you haven’t already!

https://collapseos.org/


Wow that's cool. I'll definitely be checking it out. Ironically though, it illustrates one of my points: the documentation link is already broken (although it also exists in the download). It says it runs on a POSIX environment and uses cc so it still has more dependencies, but definitely worth digging into and a step in the right direction. Thanks.

I was just telling someone[1] that I explored Forth for this exact purpose a few years ago, but ended up with a more imperative, statement-oriented approach: https://github.com/akkartik/mu

[1] https://merveilles.town/@akkartik/106978783913724906


Bare metal programming in C99 hasn't changed in... Ever really, so that's another option if you want something less esoteric than Forth. (Though Forth is a lot of fun and will give you a dynamic experience that few other environments at this level of the stack can offer.)

Look into "SeedForth".

I've written SDKs that have lasted for decades.

There are a number of reasons why. Most don't stay static, though. People mod them. They need to be something that can be handed off to a new team.

I find documentation is key.

I write about that here: https://littlegreenviper.com/miscellany/leaving-a-legacy/


Using a filesystem is underrated these days, with web this or that. I'm debating relying only on the disk for my Adama platform (http://www.adama-lang.org/), where part of my thesis is that by combining memory, CPU, and storage into a single node I'll get decent utilization, once I get the durability story sorted out.

I personally find myself reaching more and more often for SQLite as an in-between. I like having access to basic SQL, and not needing to care about filesystem details for reliability is nice. And it's part of the Python standard library, so I need literally nothing added.
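Concretely, the zero-extra-dependency version looks something like this (the file name and schema are just illustrative):

    import sqlite3

    # One file on disk, no server process, nothing to install beyond Python itself.
    con = sqlite3.connect("app.db")
    con.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")

    with con:  # using the connection as a context manager wraps the insert in a transaction
        con.execute("INSERT INTO notes (body) VALUES (?)", ("rotate the logs",))

    for row in con.execute("SELECT id, body FROM notes"):
        print(row)
    con.close()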

A few years ago, I made an extremely simple chat bot for Telegram that gave artists access to Spotify for Artists' API. I used a simple Lambda and that's it.

I've never touched it again, and I found out recently that I now have a user base there and everything still works :) My longest-living software ever was made in a couple of hours. A little frustrating, but awesome nonetheless.


The Lindy Effect says that the longer software has already been working, the longer you can expect it to keep working.

I use this to my advantage by writing software AS IF it has already been working for 20 years, meaning I use only languages and technologies which have not introduced breaking changes in 20 years or so.

It's worked out quite well so far!


Which language or technology has not introduced breaking changes in the last two decades? Windows has good backward compatibility, but even Windows can't run really old programs. C++ has evolved dramatically in the last 20 years. Even C had changes. Do people still use tables to create websites, or jquery?

Now, to be fair, I'm talking about carefully writing code with the knowledge of the previous 20 years, which is a little bit like "cheating", whatever that concept means here.

I use ASCII/textfiles, HTTP, CGI, SSI, access.log, Perl, Bash, and small subsets of PHP, HTML, JS, CSS, and almost no third-party libraries. Certainly no JQuery or anything similar.

I'm not hardcore enough for C or C++, but I know that a program written 20 years ago in these languages would most likely compile today.


> a program written 20 years ago in these languages would most likely compile today

Not without continuous attention over those 20 years.

I'm currently struggling to get a C program last touched 10 years ago to compile.


Like I said, I'm cheating a little bit, and using the knowledge I have today to "retro-edit" this hypothetical 20-year-old program.

SQL. Well mostly.

Oh yeah, forgot to mention that! Thanks

I wrote a static website for a family business using vanilla HTML - it has been up and running for several years.

I also have an (abandoned) website using a Markdown-to-HTML converter. I've switched converters twice because they eventually broke over the years, and finally moved to plain pandoc + a script the last time. Even then, the CSS somehow broke and I haven't fixed it yet.

Maybe my next project will be to convert it to HTML-only and actually add updates, instead of just dreading picking it up and finding another broken tool.


Re: dependencies, for "low-upkeep" projects I also like pinning specific patch numbers instead of doing any kind of semver pattern.

Also +1 to using files as a DB for small-scale projects. Something I've done a few times is keep a single in-memory JS object as the source of truth, and just write it to disk as JSON on a certain cadence / read it from disk at startup. It's wonderful having your little one-off server not need to be a distributed system from day 1.
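The comment above describes this in JS; here is the same snapshot-on-a-cadence pattern sketched in Python for concreteness (the file name and structure are assumptions):

    import json
    from pathlib import Path

    STATE_FILE = Path("state.json")  # hypothetical snapshot location

    # The single in-memory object that is the source of truth.
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {"visits": 0}

    def save_state():
        # Write to a temp file and rename so a crash mid-write can't corrupt the snapshot.
        tmp = STATE_FILE.with_suffix(".tmp")
        tmp.write_text(json.dumps(state))
        tmp.replace(STATE_FILE)

    # Mutate freely in handlers, then persist on whatever cadence you like (timer, per-request, at exit).
    state["visits"] += 1
    save_state()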


SemVer was designed specifically for libraries. It's really pointless to adhere to it if your code won't be imported into other people's software projects.

I meant for my dependencies, so they don't automatically change underneath me, even in ways that are harmless according to semver

My only point was that SemVer is unnecessary if you're the only consumer of that code.

A simple integer or a date would work just as well and would be much easier to automate.


Maybe, but that's not at all the situation I'm talking about

On this topic, what happened with https://www.jefftk.com/nextbus/ ? Seems to be a 404 now. Was there a breaking change or disruption in the nextbus API?

I loved its bare-bones UI. It was definitely one of the most usable webapps. I assume its a11y was nearly perfect as well.

If anyone would like to see it, here's an archived snapshot:

https://web.archive.org/web/20170622033104/https://www.jefft...

I also admire https://diskprices.com/ for the same reasons.


Well, it probably starts with a stable secure cross-platform language and ecosystem.

Things like this are best explained by a very grey neckbeard who's been through a lot.

The author is not particularly old, and programs in Python (something that has broken compatibility once)... "stick to the standard library" is about as ground-level an opinion on this subject as it gets; the kind a typical HN reader probably thought of as they clicked the link.

1) need a stable, mature, secure language

Stable eliminates Rust, Python, Perl. Go hasn't been around long enough.

Secure eliminates C, C++, probably D and most compiled languages

No dependencies: yikes there goes Java because of the JVM dependency.

Assembly isn't portable.

...Seriously, are we left with Ada? Secure, strict, mature, reasonably portable. And virtually unknown outside of government/military.

2) UIs age, but CLIs really don't

So make it a CLI, and use unix philosophy

3) OS churn

Hopefully with the practical cessation of gigahertz scaling, the industry will settle down. Then again, the ARM hordes are approaching to wipe out all the x86 stability in desktops and servers.

Perhaps the best thing for this in the industry has been Windows subsystem for Linux. If we could get an OSX subsystem for Linux, that would be a real boon.

Even Linux on its umpteenth release is still changing quite a lot, with NIH reinventions like Wayland and systemd and the toss-it-all-out GNOME 3. Well, it kind of keeps us employed. I'd adhere to assuming a Unix-ish OS, like maybe POSIX.

So: Ada, CLI/text, POSIX compatibility. That's not limiting at all, is it?

Sigh.


> 1) need a stable, mature, secure language

> Stable eliminates Rust, Python, Perl. Go hasn't been around long enough.

> Secure eliminates C, C++, probably D and most compiled languages

> No dependencies: yikes there goes Java because of the JVM dependency.

Common Lisp, then?


Almost, but Java has a VM. Yes you can technically compile Java to an executable but it's not done much. CL has a VM too, does it not?

CL supports whatever implementation technique you want: hardware Lisp Machines with compiling into LAP, native compilers with a small runtime, bytecode compilers - with OR without JITting native code, or simple AST interpreters. Some implementations even support multiple techniques at once (such as CMUCL which supports native compilation, bytecode compilation and interpretation, and direct interpretation). The standard doesn't prescribe implementation techniques.

I noticed that in one of the files he uses lxml, which is not part of the standard library. Is this one of those "do as I say, not as I do" situations?

https://github.com/jeffkaufman/webscripts/blob/b6a004790b464...


When I started this script ~12y ago there wasn't any good html parsing option in the standard library, and that was worth taking a dependency for. I don't see a conflict there with "minimize your dependencies and limit them to ones that value backwards compatibility"?

An interesting take-away from this is designing low-upkeep business services. I have been dealing with trying to onboard to a service at work, and there are so many dependencies! Several are pinned to versions that require Python 2, but I can't install Python 2 because another service I use calls "python" and assumes it's linked to Python 3!

And sometimes someone will add a new feature, and include an entire library for that single method call, instead of trying to write a little JavaScript.

I almost think there should be a non-trivial process at most businesses for including new libraries so that web devs would try to do it without adding a new library, so they didn't have to deal with red tape!


I don't know about waiting so long on updating Ubuntu, or essential server software. As an Arch Linux user, I've come to loathe outdated packages that break, which is ironic because Arch is notorious for breaking on new updates. From personal experience, the latter happens less frequently than the former.

I don't like the solution for comments. I think there's real value in having a discussion that is directed at and moderated by the author. For example, if an author writes a post that people on the forums from which he aggregates comments generally disagree with, I don't want to see one comment after another denouncing him. I want to see comments from people who at least respect him enough to write a comment on his blog, directed at him.

Would a WordPress blog really be harder to maintain than this? If so, I think a better solution would be a mailto link, so that the email contains some basic text like this:

> My comment on blog.com/article3:

> ...

Then maybe you could automate adding the email as a comment to the article in some way.


If you accept comments, you will have to deal with spam. That is a very high-maintenance problem.

I've considered building my own comment system, and I think I could do it in a low-maintenance way. The main reason I haven't is that I actually like the pattern where discussion happens on facebook/lesswrong/hn/etc. I get wider and more interesting feedback that way.

Very nice writing, except the Python part. 2to3 alone has really been a maintenance pain for me. The hardest part with scripting languages is actually knowing whether something is wrong or not without extensive testing.

Writing something maintenance-free should be done in a compiled language.


I think it is not necessarily about being compiled versus interpreted. It is more about static typing, and the type-related issues being revealed and fixed right at the beginning instead of only when the user invokes the problematic branch of code at runtime.

On the other hand, one of the most long-lived and apparently low-maintenance pieces of software I've used so far has been GNU Maxima, which is not written in a statically typed language.

> Very nice writing, except the Python part. 2to3 alone has really been a maintenance pain for me.

I agree. I could have been talked into getting on board with Python for long-term low-maintenance projects if only they hadn't burned everybody with the backwards-compatibility break. I understand of course why the changes were made, but that doesn't make it any easier. It's not impossible that 3 -> 4 will do the same.

I was wondering the other day whether something that is already dead might make a better scripting language. For example, you don't have to worry about any random changes to the COBOL language! I suspect something like JS is pretty much at maturity now, with WebAssembly poised to replace it.


I doubt 3 -> 4 will be too much of a shock; IIRC the only planned breakage is switching annotations to delayed evaluation, so annotations in things like classes can reference things that aren't defined yet.
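For reference, a tiny illustration of the delayed-evaluation behavior that comment is talking about (PEP 563 style; the class here is made up):

    from __future__ import annotations  # annotations stored as strings, evaluated lazily (if ever)

    class Node:
        # Without the future import, evaluating this annotation at class-definition time
        # would raise NameError, because the name Node isn't bound until the class body finishes.
        def next_node(self) -> Node:
            ...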

> The biggest piece is minimizing your dependencies, and limiting them to ones that value backwards compatibility

Three.js, one of the most popular JS libraries, has the explicit policy of ignoring backward compatibility.

https://github.com/mrdoob/three.js/wiki/Migration-Guide

I find it not only irresponsible but arguably mean to other devs' time and lives, but apparently I'm in the minority.

Every time I have to spend an afternoon or evening updating stuff instead of doing something new or visiting friends, etc., I silently curse the devs.


You curse the devs who build excellent libraries and give them away for free, because they do not accept the additional burden of never making changes that would require you to adjust your usage?

You could choose not to use such libraries, but as long as they're up front about it I don't see how you can hold it against them.


It upsets me because they are the only library, out of the hundreds I use, that does this. They consciously break things every month. They don't give a fuck how many man-hours of other people's time it burns. No other project I know of does this.

It's not just the time spent using the library. Every volunteer who wrote docs or a tutorial now has work that's out of date. Every answer on S.O. that's more than 3-4 months old is wrong. Every new user now has the burden of trying to learn while any content they find is practically guaranteed not to work correctly with the current version.

It's mean in the extreme to piss on so many people.


I'm sure you know this, but for the record, they don't actually ignore backward compatibility; it features often as a concern in the dev discussions, they're conscious of when they break it, and there can be deprecations[1]... it's a conscious tradeoff allowing development to go faster.

I'd argue Three.js beats native for making low-upkeep & long-lived end-user (= portable) 3D apps, given all the churn, constant driver breakages necessitating new workarounds, proprietary-ness, and fragmentation in 3D APIs. With Three you update to newer versions on your own time (if ever) and stuff doesn't break from under you. And in the decades-long view, WebGL 1 / WebGL 2 / WebGPU are used behind the scenes.

[1] example: https://github.com/mrdoob/three.js/pull/21777


> This is not how I approach design at work, or necessarily how I would recommend doing so. Having a budget where initial creation is essentially free (fun!) while maintenance is extremely expensive (drudgery!)

For fun personal projects I like using PHP to manipulate static HTML documents. I have quite a few projects with broken admin areas written in ancient PHP, but (aside from user contributions) the HTML/CSS/JS/XML/JSON just continues to work as if nothing happened. Sometimes I just delete the admin tools.

Good advice for personal projects. Sadly you can't use a lot of the free libraries out there, because more often than not you'll be forced to upgrade (possibly due to a security vulnerability whose fix the author refuses to backport to "old" major revisions) and encounter a backwards incompatibility when upgrading.

A few weeks ago I wrote a lengthy post here on HN about how to write low-upkeep software in PHP:

https://news.ycombinator.com/item?id=28232146


I wonder if there's a way to do server-side JavaScript programming with this philosophy.

For the client side I can just throw together some HTML and JavaScript to make a simple app. But if I want to use JavaScript on the server, it seems like I need to download a bunch of npm dependencies to do anything useful.


Sure, you can do it on the server with Node. Node has `http` as part of its standard library and you can write a simple webapp without third party dependencies. Check this tutorial for example [0].

The problem is that Node's http lib isn't super productive and you gotta write some "solved" stuff like cookie handlers. Also, there is no database standard library. If you want to persist data, you could use files. If you need a database (even SQLite), you gotta use third-party stuff.

In comparison, PHP has a lot of stuff build-in and you can be more productive with zero dependencies in PHP.

But then, maybe it doesn't really matter if you include a well-maintained npm package or if you have a more powerful standard library that has some features deprecated after a while. It's still "other people's code", although npm packages require some kind of installation, whereas a standard library usually doesn't.

[0]: https://www.section.io/engineering-education/pure-node-js-no...


There was a trend a few years ago (maybe still going) of saying that languages with a large standard library, like PHP, are bloated; that we should strive for minimalism and just share dependencies when needed. But then the Node.js left-pad debacle happened.

I used to be more on the minimalist side, but over time I have switched to the opposite. Today I can really appreciate the bloated nature of PHP; everything from large projects to small scripts just becomes easier to handle.


I think the idea that we need continuous improvements to maintain a steady user base is a myth. And it is disturbing to me that we talk about browser support and the global reach of internet services, yet I struggle to access many popular websites on my Galaxy S2.

For https://biokeanos.com (biomedical data catalog and discovery tool), keeping things on disk and versioning by pushing to a repo has already saved me months on debugging and maintenance alone.

I have been thinking of such a setup lately for another piece of software. Is your system distributed in any way? If so, how do you resolve conflicts in version control?

Designing for minimum maintenance is nice when you're working on something for "yourself"... not so much if you're creating something for someone else. Since designing for minimum maintenance takes additional time (and money) and most stakeholding entities (contractee, employer) will want the job to be done as soon as possible (and only THEN worry about upkeep), this can easily prevent you from "doing the right thing" simply by the nature of competition. Not to mention the elephant in the room - that minimum maintenance often means less money for the person doing the maintenance, i.e. "you". Vendor lockin is an issue only if you're not the vendor.

One of the reasons the whole serverless philosophy appeals. Stuff can still go very wrong, but at least there isn't a server that is your problem when things go sideways.

Beautiful code denies its existence so perfectly you forget you wrote it.

PHP + MySQL has been low upkeep for at least 15 years; is that enough?

Sounds good for personal projects.


